161 research outputs found

    A horizontally-scalable multiprocessing platform based on Node.js

    This paper presents a scalable web-based platform called Node Scala, which splits and handles requests on a parallel distributed system according to pre-defined use cases. We applied this platform to a client application that visualizes climate data stored in MongoDB, a NoSQL database. The design of Node Scala leads to efficient usage of the available computing resources and allows the system to scale simply by adding new workers. A performance evaluation of Node Scala demonstrated a gain of up to 74 % compared to state-of-the-art techniques.
    Comment: 8 pages, 7 figures. Accepted for publication as a conference paper at the 13th IEEE International Symposium on Parallel and Distributed Processing with Applications (IEEE ISPA-15).
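The split/handle/merge pattern the abstract describes can be illustrated with a minimal sketch. This is not Node Scala's actual Node.js code: the names `handle_request` and `handle_subrequest` are hypothetical, and a portable thread pool stands in for the platform's separate worker processes. The key idea survives the translation: a request is split into independent sub-requests, handled by a pool of parallel workers, and the partial results are merged, so throughput scales by raising the worker count.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_subrequest(chunk):
    # Stand-in for one pre-defined use case, e.g. aggregating a slice
    # of climate records fetched from MongoDB; here it just sums values.
    return sum(chunk)

def handle_request(data, n_workers=4, n_splits=8):
    # Split the request into independent sub-requests...
    chunks = [data[i::n_splits] for i in range(n_splits)]
    # ...handle them on a pool of parallel workers...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(handle_subrequest, chunks)
    # ...and merge the partial results into one response.
    return sum(partials)

print(handle_request(list(range(1000))))  # -> 499500
```

Because the sub-requests are independent, adding workers (a larger `n_workers`) is the only change needed to scale this pattern horizontally.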

    Ad-Hoc File Systems At Extreme Scales


    Feed-Forward-Only Training of Neural Networks

    While artificial neural networks have made immense advances over the last decade, the underlying approach to training them, namely solving the credit assignment problem by computing gradients with back-propagation, has remained largely the same. Nonetheless, back-propagation has long been criticized as biologically implausible because it relies on concepts that are not viable in the brain. With delayed error forward projection (DEFP), I introduce a feed-forward-only training algorithm that solves two core issues for biological plausibility: the weight transport problem and the update locking problem. It is based on the similarly plausible direct random target projection algorithm but improves the approximated gradients by using delayed error information as a sample-wise scaling factor in place of the targets. Evaluating delayed error forward projection on image classification with fully-connected and convolutional neural networks, I find that it can achieve higher accuracy than direct random target projection, especially for fully-connected networks. Interestingly, scaling the updates with the error yields significantly better results than scaling with the gradient of the loss for all networks and datasets. In total, delayed error forward projection demonstrates the applicability of feed-forward-only training algorithms. This offers exciting new possibilities both for in-the-loop training on neuromorphic devices and for pipelined parallelization.
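A toy sketch can make the feed-forward-only idea concrete. This is not the thesis' code: it shows the direct-random-target-projection-style update that DEFP builds on, where each hidden layer gets its learning signal from a fixed random projection of the target rather than from back-propagated gradients (no weight transport). The `err_scale` parameter is a hypothetical stand-in for DEFP's delayed, sample-wise error factor; with `err_scale=1.0` the update reduces to plain target projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 16, 3

W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # input  -> hidden
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output
B1 = rng.normal(0.0, 0.5, (n_hid, n_out))  # fixed random target projection

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(W2 @ relu(W1 @ x))

def train_step(x, t, lr=0.05, err_scale=1.0):
    """One feed-forward-only update; err_scale=1.0 is plain DRTP."""
    global W1, W2
    z1 = W1 @ x
    y = softmax(W2 @ relu(z1))
    # Hidden layer: a random projection of the target, gated by the ReLU
    # derivative and scaled by the (possibly delayed) per-sample error.
    d1 = err_scale * (B1 @ t) * (z1 > 0)
    W1 += lr * np.outer(d1, x)
    # The output layer uses its local cross-entropy error directly.
    W2 += lr * np.outer(t - y, relu(z1))

# Toy data: three noisy clusters centred on the unit vectors of R^3.
X = np.vstack([rng.normal(np.eye(3)[c], 0.1, (30, n_in)) for c in range(3)])
T = np.repeat(np.eye(n_out), 30, axis=0)
for _ in range(20):
    for x, t in zip(X, T):
        train_step(x, t)

acc = np.mean([predict(x).argmax() == t.argmax() for x, t in zip(X, T)])
```

Note how no information ever flows backwards through `W1` or `W2`: the hidden update depends only on the fixed matrix `B1` and the forward pre-activation, which is what makes the scheme compatible with pipelined, update-unlocked training.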

    Large-Scale Data Management and Analysis (LSDMA) - Big Data in Science


    Helmholtz Portfolio Theme Large-Scale Data Management and Analysis (LSDMA)

    The Helmholtz Association funded the "Large-Scale Data Management and Analysis" portfolio theme from 2012 to 2016. Four Helmholtz centres, six universities, and another research institution in Germany joined forces to enable data-intensive science by optimising data life cycles in selected scientific communities. In our Data Life Cycle Labs, data experts performed joint R&D together with scientific communities. The Data Services Integration Team focused on generic solutions applied by several communities.

    Semi-Supervised Time Point Clustering for Multivariate Time Series

